Learning Vector Quantization with Training Count (LVQTC)

Author

  • Roberto Odorico

Abstract

Kohonen's learning vector quantization (LVQ) is modified by attaching training counters to each neuron, which record its training statistics. During training, this allows for dynamic self-allocation of the neurons to classes. In the classification stage, the training counters provide an estimate of the classification reliability of individual neurons, which can be exploited to obtain a substantially higher purity of classification. The method turns out to be especially valuable in the presence of considerable overlap among class distributions in the pattern space. The results of a typical application to high-energy elementary particle physics are discussed in detail. Copyright 1997 Elsevier Science Ltd.
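The mechanism described in the abstract — per-neuron training counters accumulated during LVQ training, dynamic reassignment of neurons to their majority class, and purity-based rejection at classification time — can be sketched as follows. This is an illustrative LVQ1-style sketch, not the paper's exact update rules; the function names, the stratified initialization, and the `min_purity` rejection threshold are all assumptions made for the example.

```python
import numpy as np

def train_lvqtc(X, y, n_protos, n_classes, epochs=20, lr0=0.3, seed=0):
    """LVQ1-style training with per-neuron training counters (LVQTC sketch).

    Each prototype keeps a counter of how many training patterns of each
    class it has won; its class label is periodically reassigned to the
    majority class in its counters (dynamic self-allocation), and counter
    purity serves as a reliability estimate at classification time.
    """
    rng = np.random.default_rng(seed)
    per_class = n_protos // n_classes  # stratified init: equal prototypes per class
    protos = np.vstack([
        X[y == c][rng.choice((y == c).sum(), per_class, replace=False)]
        for c in range(n_classes)
    ]).astype(float)
    labels = np.repeat(np.arange(n_classes), per_class)
    counts = np.zeros((len(protos), n_classes))  # training counters

    for epoch in range(epochs):
        lr = lr0 * (1 - epoch / epochs)  # decaying learning rate
        for i in rng.permutation(len(X)):
            x, c = X[i], y[i]
            w = np.argmin(np.linalg.norm(protos - x, axis=1))  # winning neuron
            counts[w, c] += 1
            # LVQ1 update: attract on class match, repel on mismatch
            sign = 1.0 if labels[w] == c else -1.0
            protos[w] += sign * lr * (x - protos[w])
        # dynamic self-allocation: reassign each active neuron to its majority class
        active = counts.sum(axis=1) > 0
        labels[active] = counts[active].argmax(axis=1)

    purity = counts.max(axis=1) / np.maximum(counts.sum(axis=1), 1)
    return protos, labels, purity

def classify(protos, labels, purity, x, min_purity=0.8):
    """Classify x, rejecting patterns whose winning neuron is unreliable."""
    w = np.argmin(np.linalg.norm(protos - x, axis=1))
    return labels[w] if purity[w] >= min_purity else -1  # -1 = rejected
```

Rejecting patterns won by low-purity neurons is what trades efficiency for the "substantially higher purity of classification" the abstract refers to: in regions where class distributions overlap, neurons accumulate mixed counters and their decisions are withheld.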


Similar articles

DFUB 95/16 NEURAL 2.00 A Program for Neural Net and Statistical Pattern Recognition*

A neural net program for pattern classification is presented, which includes: i) an improved version of Kohonen's Learning Vector Quantization (LVQ with Training Count); ii) Feed-Forward Neural Networks with Back-Propagation training; iii) Gaussian (or Mahalanobis distance) classification; iv) Fisher linear discrimination. Back-Prop trainings with emulations of Intel's ETANN and Siemens' MA16 n...


Stationarity of Matrix Relevance Learning Vector Quantization

We investigate the convergence properties of heuristic matrix relevance updates in Learning Vector Quantization. Under mild assumptions on the training process, stationarity conditions can be worked out which characterize the outcome of training in terms of the relevance matrix. It is shown that the original training schemes single out one specific direction in feature space which depends on th...


Image Compression Based on a Novel Fuzzy Learning Vector Quantization Algorithm

We introduce a novel fuzzy learning vector quantization algorithm for image compression. The design procedure of this algorithm encompasses two basic issues. Firstly, a modified objective function of the fuzzy c-means algorithm is reformulated and then is minimized by means of an iterative gradient-descent procedure. Secondly, the training procedure is equipped with a systematic strategy to acc...


Learning vector quantization: The dynamics of winner-takes-all algorithms

Winner-Takes-All (WTA) prescriptions for Learning Vector Quantization (LVQ) are studied in the framework of a model situation: Two competing prototype vectors are updated according to a sequence of example data drawn from a mixture of Gaussians. The theory of on-line learning allows for an exact mathematical description of the training dynamics, even if an underlying cost function cannot be ide...


A new two-step learning vector quantization algorithm for image compression

The learning vector quantization (LVQ) algorithm is widely used in image compression because of its intuitively clear learning process and simple implementation. However, LVQ strongly depends on the initialization of the codebook and often converges to locally optimal results. To address these issues, a new two-step LVQ (TsLVQ) algorithm is proposed in the paper. TsLVQ uses a correcting learning st...



Journal title:
  • Neural networks : the official journal of the International Neural Network Society

Volume: 10  Issue: 6

Pages: -

Publication year: 1997